
    Dynamic Facial Expression of Emotion Made Easy

    Facial emotion expression for virtual characters is used in a wide variety of areas. Often, the primary reason to use emotion expression is not to study emotion expression generation per se, but to use emotion expression in an application or research project. What is then needed is an easy-to-use, flexible, and validated mechanism to do so. In this report we present such a mechanism. It enables developers to build virtual characters (VCs) with dynamic affective facial expressions. The mechanism is based on Facial Action Coding. It is easy to implement, and code is available for download. To show the validity of the expressions generated with the mechanism, we tested the recognition accuracy for 6 basic emotions (joy, anger, sadness, surprise, disgust, fear) and 4 blend emotions (enthusiastic, furious, frustrated, and evil). Additionally, we investigated the effect of VC distance (z-coordinate), the effect of the VC's face morphology (male vs. female), the effect of a lateral versus a frontal presentation of the expression, and the effect of the intensity of the expression. Participants (n=19, Western and Asian subjects) rated the intensity of each expression for each condition (within-subject setup) in a non-forced-choice manner. All of the basic emotions were uniquely perceived as such. Furthermore, the blends and the confusion details of the basic emotions are compatible with findings in psychology.
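
    To make the mechanism concrete, here is a minimal Python sketch of one possible parameterisation: each emotion is mapped to a set of FACS action-unit weights, blends combine several weighted emotions, and a time-varying ramp makes the expression dynamic. The AU mappings, function names, and blending rule are illustrative assumptions, not the authors' released code.

        # Illustrative sketch: a dynamic facial expression as time-scaled FACS
        # action-unit weights. Mappings and blending rule are assumptions, not
        # the authors' released implementation.
        import math

        # Hypothetical mapping from emotion label to action-unit weights (0..1).
        EMOTION_AUS = {
            "joy":     {"AU6": 1.0, "AU12": 1.0},            # cheek raiser, lip corner puller
            "anger":   {"AU4": 1.0, "AU5": 0.7, "AU23": 1.0},
            "sadness": {"AU1": 1.0, "AU4": 0.6, "AU15": 1.0},
        }

        def blend(emotions):
            """Combine weighted emotions (e.g. a 'frustrated' blend) into one AU profile."""
            profile = {}
            for label, strength in emotions.items():
                for au, weight in EMOTION_AUS[label].items():
                    profile[au] = max(profile.get(au, 0.0), strength * weight)
            return profile

        def expression_at(emotions, t, onset=1.0):
            """AU intensities at time t, ramped in smoothly over `onset` seconds."""
            ramp = 0.5 - 0.5 * math.cos(math.pi * min(t / onset, 1.0))
            return {au: weight * ramp for au, weight in blend(emotions).items()}

        # Example: full-intensity anger mixed with half-intensity sadness, sampled mid-onset.
        print(expression_at({"anger": 1.0, "sadness": 0.5}, t=0.5))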

    Reducing social diabetes distress with a conversational agent support system: a three-week technology feasibility evaluation

    Background: People with diabetes mellitus not only have to deal with physical health problems, but also with the psycho-social challenges their chronic disease brings. Currently, technological tools that support the psycho-social context of a patient have received little attention. Objective: The objective of this work is to determine the feasibility and preliminary efficacy of an automated conversational agent to deliver, to people with diabetes, personalised psycho-education on dealing with (psycho-)social distress related to their chronic illness. Methods: In a double-blinded between-subject study, 156 crowd-workers with diabetes received a social help program intervention in three sessions over three weeks. They were randomly assigned to receive support from either an interactive conversational support agent (n=79) or a self-help text from the book “Diabetes Burnout” as a control condition (n=77). Participants completed the Diabetes Distress Scale (DDS) before and after the intervention, and, after the intervention, the Client Satisfaction Questionnaire (CSQ-8), Feeling of Being Heard (FBH), and System Usability Scale (SUS). Results: People using the conversational agent had a larger reduction in diabetes distress (M=−0.305, SD=0.865) than the control group (M=0.002, SD=0.743), and this difference is statistically significant (t(154)=2.377, p=0.019). A hypothesised mediation effect of “attitude to the social help program” was not observed. Conclusions: An automated conversational agent can deliver personalised psycho-education on dealing with (psycho-)social distress to people with diabetes and reduce diabetes distress more than a self-help book. Ethics, Study Registration and Open Science: This study has been preregistered with the Open Science Foundation (osf.io/yb6vg) and has been accepted by the Human Research Ethics Committee of Delft University of Technology under application number 1130. The data and analysis script are available at: https://surfdrive.surf.nl/files/index.php/s/4xSEHCrAu0HsJ4P
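
    As a rough illustration of the reported analysis, the sketch below runs an independent-samples t-test on pre-post change scores of the Diabetes Distress Scale; the data are simulated from the reported means and standard deviations, so the resulting statistics will not reproduce the published values exactly.

        # Sketch of the reported group comparison: independent-samples t-test on
        # pre-post DDS change scores. Data are simulated, not the study data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(seed=1)
        agent_change = rng.normal(-0.305, 0.865, size=79)    # conversational-agent arm
        control_change = rng.normal(0.002, 0.743, size=77)   # self-help text arm

        result = stats.ttest_ind(agent_change, control_change)
        df = len(agent_change) + len(control_change) - 2
        print(f"t({df}) = {result.statistic:.3f}, p = {result.pvalue:.3f}")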

    Reminders make people adhere better to a self-help sleep intervention

    The experiment presented in this paper investigated the effects of different kinds of reminders on adherence to automated parts of a cognitive behavioural therapy for insomnia (CBT-I) delivered via a mobile device. Previous studies report that computerized health interventions can be effective; however, treatment adherence is still an issue. Reminders are a simple technique that could improve adherence. A minimal intervention prototype in the realm of sleep treatment was developed to test the effects of reminders on adherence. Two prominent ways to determine the reminder time are: a) ask users when they want to be reminded, and b) let an algorithm decide when to remind users. The prototype consisted of a sleep diary, a relaxation exercise and reminders. A within-subject design was used in which the effect of reminders and the two underlying principles were tested by 45 participants, who each received the following three conditions (in random order): a) event-based reminders, b) time-based reminders, c) no reminders. Both types of reminders improved adherence compared to no reminders. No differences were found between the two types of reminders. Opportunity and self-empowerment partly mediated adherence to filling out the sleep diary, but not the number of relaxation exercises conducted. Although the study focussed on CBT-I, we expect that designers of other computerized health interventions will also benefit from the tested opportunity and self-empowerment principles for reminders to improve adherence.
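
    The two reminder-timing principles can be illustrated with a small scheduling sketch: a time-based reminder fires at a clock time chosen by the user, while an event-based reminder is derived from an observed routine (here, the user's recent bedtimes). The bedtime heuristic and function names are assumptions for illustration, not the prototype's implementation.

        # Sketch of the two reminder-timing principles tested in the study.
        # The bedtime heuristic and function names are illustrative assumptions.
        from datetime import datetime, time
        from statistics import median

        def time_based_reminder(user_chosen: time, day: datetime) -> datetime:
            """The user decides when to be reminded (e.g. 21:30 every evening)."""
            return day.replace(hour=user_chosen.hour, minute=user_chosen.minute,
                               second=0, microsecond=0)

        def event_based_reminder(recent_bedtimes, day: datetime,
                                 lead_minutes: int = 30) -> datetime:
            """An algorithm decides: remind shortly before the user's typical bedtime."""
            minutes = [t.hour * 60 + t.minute for t in recent_bedtimes]
            typical = int(median(minutes)) - lead_minutes
            return day.replace(hour=typical // 60, minute=typical % 60,
                               second=0, microsecond=0)

        today = datetime(2024, 5, 1)
        print(time_based_reminder(time(21, 30), today))
        print(event_based_reminder([time(23, 0), time(22, 45), time(23, 15)], today))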

    How should a virtual agent present psychoeducation?

    BACKGROUND AND OBJECTIVE: With the rise of autonomous e-mental health applications, virtual agents can play a major role in improving trustworthiness, therapy outcome and adherence. In these applications, it is important that patients adhere in the sense that they perform the tasks, but also that they adhere to the specific recommendations on how to do them well. One important construct in improving adherence is psychoeducation: information on the why and how of therapeutic interventions. In an e-mental health context, this can be delivered in two different ways: verbally, by a (virtual) embodied conversational agent, or via text on the screen.

    Using a conversational agent for thought recording as a cognitive therapy task: Feasibility, content, and feedback

    E-mental health for depression is increasingly used in clinical practice, but patient adherence suffers as therapist involvement decreases. One reason may be the low responsiveness of existing programs: especially autonomous systems are limited in their input interpretation and feedback-giving capabilities. Here, we explore (a) to what extent a more socially intelligent and, therefore, technologically advanced solution, namely a conversational agent, is a feasible means of collecting thought-record data in dialog; (b) what people write about in their thought records; and (c) whether providing content-based feedback increases motivation for thought recording, a core technique of cognitive therapy that helps patients gain an understanding of how their thoughts cause their feelings. Using the crowd-sourcing platform Prolific, 308 participants with subclinical depression symptoms were recruited and split into three conditions of varying feedback richness using the minimization method of randomization. They completed two thought-recording sessions with the conversational agent: one practice session with scenarios and one open session using situations from their own lives. All participants were able to complete thought records with the agent such that the thoughts could be interpreted by the machine learning algorithm, rendering the completion of thought records with the agent feasible. Participants chose interpersonal situations nearly three times as often as achievement-related situations in the open chat session. The three most common underlying schemas were the Attachment, Competence, and Global Self-evaluation schemas. No support was found for a motivational effect of providing richer feedback. In addition to our findings, we publish the dataset of thought records for interested researchers and developers.
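
    The minimization method mentioned above can be sketched, in its simplest form, as assigning each new participant to whichever condition currently keeps the groups most balanced, with ties broken at random; the version below balances group size only and is a generic illustration rather than the study's exact procedure.

        # Minimal sketch of minimization-style assignment over three feedback conditions.
        # Generic illustration (group-size balance only), not the study's exact procedure.
        import random

        CONDITIONS = ["no_feedback", "basic_feedback", "rich_feedback"]

        def assign(counts):
            """Place the next participant in the least-filled condition (random tie-break)."""
            smallest = min(counts.values())
            candidates = [c for c, n in counts.items() if n == smallest]
            choice = random.choice(candidates)
            counts[choice] += 1
            return choice

        counts = {c: 0 for c in CONDITIONS}
        allocation = [assign(counts) for _ in range(308)]
        print(counts)  # close to 308 / 3 participants per condition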

    Considering patient safety in autonomous e-mental health systems - detecting risk situations and referring patients back to human care

    Background: Digital health interventions can fill gaps in mental healthcare provision. However, autonomous e-mental health (AEMH) systems also present challenges for effective risk management. To balance autonomy and safety, AEMH systems need to detect risk situations and act on them appropriately. One option is sending automatic alerts to carers, but such 'auto-referral' could lead to missed cases or false alerts. Requiring users to actively self-refer offers an alternative, but this can also be risky as it relies on their motivation to do so. This study set out with two objectives: firstly, to develop guidelines for risk detection and auto-referral systems; secondly, to understand how persuasive techniques, mediated by a virtual agent, can facilitate self-referral. Methods: In a formative phase, interviews with experts, alongside a literature review, were used to develop a risk detection protocol. Two referral protocols were developed: one involving auto-referral, the other motivating users to self-refer. The latter was tested via crowdsourcing (n=160). Participants were asked to imagine they had sleeping problems, with the severity of the problem and their stance on seeking help varying between scenarios. They then chatted with a virtual agent, who either directly facilitated referral, tried to persuade the user, or accepted that they did not want help. After the conversation, participants rated their intention to self-refer, their intention to chat with the agent again, and their feeling of being heard by the agent. Results: Whether the virtual agent facilitated, persuaded or accepted influenced all of these measures. Users who were initially negative or doubtful about self-referral could be persuaded. For users who were initially positive about seeking human care, persuasion did not affect their intentions, indicating that simply facilitating referral without persuasion was sufficient. Conclusion: This paper presents a protocol that elucidates the steps and decisions involved in risk detection, something that is relevant for all types of AEMH systems. In the case of self-referral, our study shows that a virtual agent can increase users' intention to self-refer. Moreover, the strategy of the agent influenced the intentions of the user afterwards. This highlights the importance of a personalised approach to promote the user's access to appropriate care.
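
    The referral logic described above can be condensed into a small decision sketch in which the agent's strategy depends on the assessed risk level and the user's stance on seeking help; the labels, thresholds, and branching below are illustrative assumptions, not the published protocol.

        # Sketch of an auto- vs self-referral decision, conditioned on risk severity
        # and the user's stance on seeking help. Labels and thresholds are assumptions.
        from enum import Enum

        class Stance(Enum):
            NEGATIVE = "negative"
            DOUBTING = "doubting"
            POSITIVE = "positive"

        def referral_action(risk_score: float, stance: Stance,
                            auto_threshold: float = 0.9,
                            help_threshold: float = 0.5) -> str:
            if risk_score >= auto_threshold:
                return "auto-refer: alert a human carer directly"
            if risk_score >= help_threshold:
                if stance is Stance.POSITIVE:
                    return "facilitate: hand over contact details for human care"
                return "persuade: motivate self-referral, then accept the user's choice"
            return "continue: keep supporting the user within the AEMH system"

        print(referral_action(0.6, Stance.DOUBTING))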

    Content-based recommender support system for counselors in a suicide prevention chat helpline: Design and evaluation study

    Background: The working environment of a suicide prevention helpline requires high emotional and cognitive awareness from chat counselors. A shared opinion among counselors is that as a chat conversation becomes more difficult, it takes more effort and more time to compose a response, which, in turn, can lead to writer's block. Objective: This study designs and evaluates supportive technology to determine whether a support system that provides inspiration can help counselors resolve writer's block when they encounter difficult situations in chats with help-seekers. Methods: A content-based recommender system with sentence embedding was used to search a chat corpus for similar chat situations. The system showed a counselor the most similar parts of former chat conversations so that the counselor could use approaches previously taken by their colleagues as inspiration. In a within-subject experiment, counselors' chat replies when confronted with a difficult situation were analyzed to determine whether experts could see a noticeable difference between chat replies obtained in 3 conditions: (1) with the help of the support system, (2) with written advice from a senior counselor, or (3) with no help. In addition, the system's utility and usability were measured, and the validity of the algorithm was examined. Results: A total of 24 counselors used a prototype of the support system; the results showed that, by reading chat replies, experts were able to significantly predict whether counselors had received help from the support system or from a senior counselor (P=.004). Counselors scored the information they received from a senior counselor (M=1.46, SD=1.91) as significantly more helpful than the information received from the support system or when no help was given at all (M=−0.21, SD=2.26). Finally, compared with randomly selected former chat conversations, counselors rated the ones identified by the content-based recommendation system as significantly more similar to their current chats (β=.30, P<.001). Conclusions: Support given to counselors influenced how they responded in difficult conversations. However, the higher utility scores given to the advice from senior counselors suggest that specific, actionable instructions are preferred. We expect that these findings will be beneficial for developing a system that can use similar chat situations to generate advice in a descriptive style, hence helping counselors through writer's block.
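
    The retrieval step can be sketched as embedding past chat situations with a sentence encoder and returning the most similar ones by cosine similarity; the model name, corpus, and library below are assumptions for illustration rather than the components of the actual system.

        # Sketch of content-based retrieval of similar chat situations via sentence
        # embeddings and cosine similarity. Model choice and corpus are assumptions.
        import numpy as np
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

        past_situations = [
            "Help-seeker says nobody would miss them and then goes silent.",
            "Help-seeker is angry that the counselor cannot call them.",
            "Help-seeker describes conflict at home and feeling trapped.",
        ]
        corpus_emb = model.encode(past_situations, normalize_embeddings=True)

        def most_similar(current_chat, k=2):
            """Return the k most similar past situations with their cosine scores."""
            query = model.encode([current_chat], normalize_embeddings=True)
            scores = (corpus_emb @ query.T).ravel()
            top = np.argsort(-scores)[:k]
            return [(past_situations[i], float(scores[i])) for i in top]

        print(most_similar("The help-seeker stopped responding after saying goodbye."))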